Unsupervised learning with stochastic gradient

Authors

  • Harold Szu
  • Ivica Kopriva
Abstract

A stochastic gradient is formulated based on a deterministic gradient augmented with Cauchy simulated annealing, capable of reaching a global minimum with a convergence speed significantly faster than when simulated annealing is used alone. In order to solve space-time-variant inverse problems known as blind source separation, a novel Helmholtz free energy contrast function, H = E − T₀S, with an imposed thermodynamic constraint at a constant temperature T₀, was introduced, generalizing the Shannon maximum entropy S of closed systems to open systems having non-zero input–output energy exchange E. Here, only the input data vector was known, while the source vector and mixing matrix were unknown. The stochastic gradient was successfully applied to solve inverse space-variant imaging problems on a concurrent pixel-by-pixel basis, with the unknown mixing matrix (the imaging point spread function) varying from pixel to pixel. Published by Elsevier B.V.
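The stochastic gradient described above can be illustrated with a minimal sketch: a deterministic gradient step plus Cauchy-distributed noise scaled by a fast cooling schedule T_k = T₀/(1 + k). The objective, step sizes, clipping box, and function names below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def cauchy_annealed_sgd(grad, w0, lr=0.05, t0=1.0, steps=2000, seed=0):
    """Deterministic gradient descent augmented with Cauchy-distributed
    annealing noise, cooled with the fast schedule T_k = t0 / (1 + k)."""
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float)
    for k in range(steps):
        temp = t0 / (1.0 + k)                     # fast (Cauchy) cooling schedule
        noise = rng.standard_cauchy(size=w.shape)  # heavy-tailed jumps aid escape
        w = w - lr * grad(w) + lr * temp * noise
        w = np.clip(w, -3.0, 3.0)                 # bound the rare huge Cauchy jumps
    return w

# Toy multimodal objective f(w) = w^4 - 3w^2 + w, with a local minimum
# near w ~ 1.14 and the global minimum near w ~ -1.30 (assumed test case).
f = lambda w: w**4 - 3 * w**2 + w
grad = lambda w: 4 * w**3 - 6 * w + 1

w_star = cauchy_annealed_sgd(grad, w0=np.array([2.0]))
```

As the temperature decays, the heavy-tailed noise vanishes and the update reduces to plain gradient descent, so the iterate settles into whichever basin the annealing phase left it in.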


Similar articles

Semi-supervised Learning with Sparse Autoencoders in Phone Classification

We propose the application of a semi-supervised learning method to improve the performance of acoustic modelling for automatic speech recognition based on deep neural networks. As opposed to unsupervised initialisation followed by supervised fine tuning, our method takes advantage of both unlabelled and labelled data simultaneously through minibatch stochastic gradient descent. We tested the me...


Parallel Unsupervised Feature Learning with Sparse AutoEncoder

Choosing the right input features for a particular machine learning algorithm is one of the deciding factors for a successful application but often a time consuming manual task. Unsupervised feature learning algorithms provide an alternative solution by automatically learning features; however, they are usually computationally intensive. In this project we explore different implementations of t...


Weight Space Probability Densities in Stochastic Learning: II. Transients and Basin Hopping Times

In stochastic learning, weights are random variables whose time evolution is governed by a Markov process. At each time-step, n, the weights can be described by a probability density function P(w, n). We summarize the theory of the time evolution of P, and give graphical examples of the time evolution that contrast the behavior of stochastic learning with true gradient descent (batch learning)....


Stochastic Learning on Imbalanced Data: Determinantal Point Processes for Mini-batch Diversification

We study a mini-batch diversification scheme for stochastic gradient descent (SGD). While classical SGD relies on uniformly sampling data points to form a mini-batch, we propose a non-uniform sampling scheme based on the Determinantal Point Process (DPP). The DPP relies on a similarity measure between data points and gives low probabilities to mini-batches which contain redundant data, and high...


Generalized ensemble model for document ranking in information retrieval

A generalized ensemble model (gEnM) for document ranking is proposed in this paper. The gEnM linearly combines basis document retrieval models and tries to retrieve relevant documents at high positions. In order to obtain the optimal linear combination of multiple document retrieval models or rankers, an optimization program is formulated by directly maximizing the mean average precision. Both ...


Nonlinear blind source separation using a radial basis function network

This paper proposes a novel neural-network approach to blind source separation in nonlinear mixtures. The approach utilizes a radial basis function (RBF) neural network to approximate the inverse of the nonlinear mixing mapping, which is assumed to exist and to be approximable by an RBF network. A contrast function which consists of the mutual information and partial moments of the output...



Journal:
  • Neurocomputing

Volume 68, Issue –

Pages –

Publication year: 2005